Computing Valid p-value for Optimal Changepoint by Selective Inference using Dynamic Programming

Neural Information Processing Systems

Although there is a vast body of literature on methods for detecting change-points (CPs), less attention has been paid to assessing the statistical reliability of the detected CPs. In this paper, we introduce a novel method to perform statistical inference on the significance of CPs estimated by a Dynamic Programming (DP)-based optimal CP detection algorithm. Our main idea is to employ a Selective Inference (SI) approach, a recently developed statistical inference framework that has attracted considerable attention, to compute exact (non-asymptotic) valid p-values for the detected optimal CPs. Although SI is known to suffer from low statistical power because of over-conditioning, we address this drawback by introducing a novel method called parametric DP, which enables SI to be conducted with the minimum amount of conditioning, leading to high statistical power. We conduct experiments on both synthetic and real-world datasets, through which we offer evidence that our proposed method is more powerful than existing methods, is computationally efficient, and provides good results in many practical applications.
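As a hedged illustration of the DP-based detection step the abstract refers to, the sketch below implements a standard "optimal partitioning" recursion (minimizing segment squared error plus a per-changepoint penalty). All names and the penalty formulation here are illustrative assumptions, not the authors' actual algorithm or code:

```python
import numpy as np

def optimal_partitioning(y, penalty):
    """DP sketch: find changepoints minimizing
    sum of per-segment squared errors + penalty per changepoint.
    Illustrative only; not the paper's implementation."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Prefix sums give O(1) segment cost evaluation.
    s = np.concatenate(([0.0], np.cumsum(y)))
    s2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def cost(i, j):
        # Squared error of segment y[i:j] around its own mean.
        m = (s[j] - s[i]) / (j - i)
        return (s2[j] - s2[i]) - (j - i) * m * m

    # F[j]: optimal cost of y[0:j]; start at -penalty so that
    # a solution with k changepoints pays exactly k * penalty.
    F = np.full(n + 1, np.inf)
    F[0] = -penalty
    last = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        cands = [F[i] + cost(i, j) + penalty for i in range(j)]
        i_star = int(np.argmin(cands))
        F[j], last[j] = cands[i_star], i_star

    # Backtrack the segment boundaries (the changepoints).
    cps, j = [], n
    while j > 0:
        i = last[j]
        if i > 0:
            cps.append(i)
        j = i
    return sorted(cps)
```

On a noiseless mean-shift signal such as twenty zeros followed by twenty fives, this recovers the single changepoint at index 20. The naive double loop is O(n^2); the parametric technique described in the paper addresses efficiency in the SI computation, not shown here.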


Review for NeurIPS paper: Computing Valid p-value for Optimal Changepoint by Selective Inference using Dynamic Programming

Neural Information Processing Systems

Summary and Contributions: Update after reading the author rebuttal: I'd like to thank the authors for their detailed rebuttal. I appreciate that they aim to expand the experimental section and related work, and to simplify the notation and descriptions where possible. I certainly think these changes can make the paper even more accessible, and I hope the authors will also incorporate the other suggestions made to improve the manuscript. With regard to the runtime shown in Figure 7, I am still not convinced it is "almost linear": connecting the means of the boxplots does not suggest a linear relationship; a quadratic one seems more likely. I'd recommend the authors fit a polynomial to these data to determine whether the runtime is linear or quadratic in the number of samples.


Review for NeurIPS paper: Computing Valid p-value for Optimal Changepoint by Selective Inference using Dynamic Programming

Neural Information Processing Systems

This paper describes a method for estimating the p-values associated with the change points detected through optimal partitioning. The reviewers were unanimous in their vote to accept. One reviewer called it a "great read and a great method" and expressed interest in using the authors' code.




Computing Valid p-value for Optimal Changepoint by Selective Inference using Dynamic Programming

Duy, Vo Nguyen Le, Toda, Hiroki, Sugiyama, Ryota, Takeuchi, Ichiro

arXiv.org Machine Learning

There is a vast body of literature on methods for detecting changepoints (CPs). However, less attention has been paid to assessing the statistical reliability of the detected CPs. In this paper, we introduce a novel method to perform statistical inference on the significance of CPs estimated by a Dynamic Programming (DP)-based optimal CP detection algorithm. Based on the selective inference (SI) framework, we propose an exact (non-asymptotic) approach to compute valid p-values for testing the significance of the CPs. Although SI is known to suffer from low statistical power because of over-conditioning, we address this disadvantage by introducing parametric programming techniques. We then propose an efficient method to conduct SI with the minimum amount of conditioning, leading to high statistical power. We conduct experiments on both synthetic and real-world datasets, through which we offer evidence that our proposed method is more powerful than existing methods, is computationally efficient, and provides good results in many practical applications.
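The selective-inference step described above conditions on the event that the DP algorithm selected the given CP; for a Gaussian noise model, this reduces the null distribution of the test statistic to a truncated normal over the selection interval. A minimal hedged sketch of that final p-value computation follows, where the interval [lo, hi] is assumed to have been produced by the (omitted) parametric-programming step, and all names are illustrative:

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def selective_p_value(z, sigma, lo, hi):
    """One-sided selective p-value for a statistic z that is
    N(0, sigma^2) under the null, conditioned on the selection
    event z in [lo, hi]. Illustrative sketch: computing [lo, hi]
    (the parametric-DP part of the method) is not shown."""
    denom = norm_cdf(hi / sigma) - norm_cdf(lo / sigma)
    return (norm_cdf(hi / sigma) - norm_cdf(z / sigma)) / denom
```

For a statistic at the center of a symmetric interval, e.g. `selective_p_value(0.0, 1.0, -1.0, 1.0)`, the p-value is exactly 0.5, and it shrinks as the observed statistic approaches the upper truncation limit, matching the intuition that evidence against the null grows with the (conditionally) extreme statistic.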